
Conversation

@herniqeu commented Dec 26, 2025

  • Replace list.pop(0) with deque.popleft() for O(1) queue dequeue
  • Cache compiled regex patterns in ResourceTemplate for URI matching
  • Cache field info mapping in FuncMetadata via lazy property
  • Throttle expired task cleanup with interval-based execution

These optimizations target high-frequency operations in message queuing, resource lookups, tool calls, and task store access.
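
For concreteness, here is a minimal sketch of the first three optimizations; the class and method names below are illustrative stand-ins, not the SDK's actual classes (the fourth, cleanup throttling, is sketched in the review thread further down).

```python
from __future__ import annotations

import re
from collections import deque
from functools import cached_property


class MessageQueue:
    """Queue dequeue: deque.popleft() is O(1); list.pop(0) shifts every element, O(n)."""

    def __init__(self) -> None:
        self._messages: deque[str] = deque()

    def put(self, msg: str) -> None:
        self._messages.append(msg)

    def get(self) -> str:
        return self._messages.popleft()


class ResourceTemplate:
    """URI matching: compile the pattern once instead of on every match call."""

    def __init__(self, uri_template: str) -> None:
        self.uri_template = uri_template
        # Turn "resource://{name}/data" into a named-group regex, once.
        self._pattern = re.compile(
            re.sub(r"{(\w+)}", r"(?P<\1>[^/]+)", uri_template) + "$"
        )

    def matches(self, uri: str) -> dict[str, str] | None:
        m = self._pattern.match(uri)
        return m.groupdict() if m else None


class FuncMetadata:
    """Field info: computed on first access, then cached on the instance."""

    def __init__(self, fields: dict[str, type]) -> None:
        self._fields = fields

    @cached_property
    def field_info(self) -> dict[str, type]:
        return dict(self._fields)
```

For example, `ResourceTemplate("resource://{name}/data").matches("resource://foo/data")` returns `{"name": "foo"}` without recompiling the pattern.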

Motivation and Context

How Has This Been Tested?

Breaking Changes

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Documentation
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have added or updated documentation as needed

Additional context

The review threads below are anchored on this excerpt from the task-store diff:

```python
from mcp.shared.experimental.tasks.store import TaskStore
from mcp.types import Result, Task, TaskMetadata, TaskStatus

CLEANUP_INTERVAL_SECONDS = 1.0
```


I guess this would be a breaking change? Throttling the cleanup means expired tasks may linger between sweeps, and the behavior becomes timing-dependent, but this makes sense if the tasks are accessed very frequently (though I'm not sure what the benchmarks on this look like). How often does this happen, and are there use cases that would be addressed by throttling cleanup?
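
For reference, a minimal sketch of what interval-based throttling looks like, assuming a hypothetical in-memory store (the real TaskStore interface differs):

```python
from __future__ import annotations

import time

CLEANUP_INTERVAL_SECONDS = 1.0


class SketchTaskStore:
    def __init__(self) -> None:
        # task_id -> (payload, expires_at monotonic timestamp)
        self._tasks: dict[str, tuple[object, float]] = {}
        self._last_cleanup = 0.0

    def _maybe_cleanup(self) -> None:
        # Skip the O(n) sweep unless the interval has elapsed, so hot
        # read paths don't pay for a full scan on every access.
        now = time.monotonic()
        if now - self._last_cleanup < CLEANUP_INTERVAL_SECONDS:
            return
        self._last_cleanup = now
        for task_id in [t for t, (_, exp) in self._tasks.items() if exp <= now]:
            del self._tasks[task_id]

    def get(self, task_id: str) -> object | None:
        self._maybe_cleanup()
        entry = self._tasks.get(task_id)
        if entry is None or entry[1] <= time.monotonic():
            # Re-checking expiry on read keeps results correct even when
            # the sweep was skipped; the throttle only delays reclamation.
            return None
        return entry[0]
```

If reads don't re-check expiry like `get` does here, throttling also makes the visibility of expired tasks timing-dependent, which is the concern raised above.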


nit: iiuc, these changes optimize repeated calls to pre_parse_json. I'm not sure what the protocol is for optimization-related tests, but it could be useful to add a small benchmark to the description: build a ~20-field pydantic model and call pre_parse_json repeatedly (50k-100k iterations, before vs. after).
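
A sketch of the suggested benchmark; the import path and call shape are assumptions about the SDK layout and may need adjusting to the repo:

```python
import timeit

# Assumed import path; verify against the repo before running.
from mcp.server.fastmcp.utilities.func_metadata import func_metadata


def tool_fn(a: int, b: list[int], c: dict[str, str], d: float) -> None:
    """Stand-in tool; the comment suggests ~20 fields for a real run."""


meta = func_metadata(tool_fn)
# String values that pre_parse_json would attempt to JSON-decode.
payload = {"a": "1", "b": "[1, 2, 3]", "c": '{"k": "v"}', "d": "2.5"}

elapsed = timeit.timeit(lambda: meta.pre_parse_json(payload), number=100_000)
print(f"100k pre_parse_json calls: {elapsed:.2f}s")  # compare before/after the caching change
```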

@maxisbey (Contributor) commented:
Thanks for the contribution, but closing this PR for now:

  1. No linked issue - There's no evidence these are actual performance problems. Please open an issue first describing the bottleneck you're seeing.

  2. Multiple unrelated changes - This bundles 4 separate optimizations (deque, regex caching, field info caching, cleanup throttling). These could be separate PRs, or combined into one if a coherent issue describes the problem; either way, an issue first would help.

  3. Unrelated modifications - The diff includes changes unrelated to the optimizations: removing docstrings, removing explanatory comments, and code style changes. These shouldn't be mixed in.

Happy to reconsider if you open an issue with benchmarks showing the problem.


@maxisbey closed this Dec 31, 2025
